Abstract:
To develop a fully complete set of errors associated with modeling and simulation, it is necessary to express every error that could impact the accuracy of a computational model's prediction of the real-world system (i.e., a set of errors that is theoretically complete) and to develop a means to assess each error (i.e., making the set practically complete). As a first step toward this goal, this paper focuses on developing a theoretically complete set of errors that, if accounted for, would result in the correct prediction of reality. To derive this theoretically complete set of errors, a three-step process is followed. First, a generic scenario is introduced that is defined by a set of functions and inputs common to many, if not most, applications in modeling and simulation. Second, using only these functions and inputs, an equation for the total error is defined such that correcting the model's prediction to account for the error would result in a correct prediction of reality. Finally, the equation for the total error is expanded by introducing terms from the generic scenario. This results in a decomposition of the total error into a set of thirteen distinct difference terms, each of which is defined as an error and many of which are closely related to current practices in verification, validation, and uncertainty quantification. These thirteen errors represent a theoretically complete set.
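As a rough illustration of the decomposition step (the notation below is ours, not the paper's): if $y_{\mathrm{true}}$ denotes the true response of the real-world system and $y_{\mathrm{sim}}$ the computational model's prediction, the total error can be written as
\[ \varepsilon_{\mathrm{total}} = y_{\mathrm{true}} - y_{\mathrm{sim}}, \]
and adding and subtracting intermediate quantities from the generic scenario (for example, the exact solution of the model equations with true inputs, $y_{\mathrm{model}}$, and the exact solution with the inputs actually used, $y_{\mathrm{input}}$) telescopes the total error into difference terms such as
\[ \varepsilon_{\mathrm{total}} = (y_{\mathrm{true}} - y_{\mathrm{model}}) + (y_{\mathrm{model}} - y_{\mathrm{input}}) + (y_{\mathrm{input}} - y_{\mathrm{sim}}), \]
where the three differences would loosely correspond to model-form, input, and numerical-solution errors. The paper performs this add-and-subtract expansion with the full set of functions and inputs of its generic scenario, which is how the thirteen distinct difference terms arise.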
Abstract:
The objective of this study is to present a novel method of level-2 uncertainty analysis in risk assessment by means of uncertainty theory. In the proposed method, aleatory uncertainty is characterized by probability distributions whose parameters are affected by epistemic uncertainty; these parameters are described as uncertain variables. For monotone risk models, such as fault trees or event trees, the uncertainty is propagated analytically based on the operational rules of uncertain variables. For non-monotone risk models, we propose a simulation-based method for uncertainty propagation. Three indexes, i.e., average risk, value-at-risk, and bounded value-at-risk, are defined for risk-informed decision making in the level-2 uncertainty setting. Two numerical studies and an application to a real example from the literature are worked out to illustrate the developed method. A comparison is made with some commonly used uncertainty analysis methods, e.g., those based on probability theory and evidence theory.
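For intuition, a generic double-loop ("level-2") simulation can be sketched as follows; this is not the paper's uncertainty-theory operational law, only an illustration of simulation-based propagation, and the failure-rate interval, mission time, and sample sizes are assumed values.

# Hedged sketch: generic double-loop propagation of level-2 uncertainty.
# Outer loop samples the epistemically uncertain parameter; inner loop does
# aleatory Monte Carlo for a fixed parameter value.
import numpy as np

rng = np.random.default_rng(0)

def risk_model(lam, n_inner=10_000):
    # Inner (aleatory) loop: probability that an exponential failure time
    # falls within a one-year mission, estimated by Monte Carlo.
    t_fail = rng.exponential(scale=1.0 / lam, size=n_inner)
    return np.mean(t_fail < 1.0)

# Outer (epistemic) loop: the failure rate lam is itself uncertain,
# here sampled from an assumed interval [1e-3, 1e-2].
lams = rng.uniform(1e-3, 1e-2, size=200)
risks = np.array([risk_model(lam) for lam in lams])

average_risk = risks.mean()               # analogue of the "average risk" index
value_at_risk = np.quantile(risks, 0.95)  # analogue of a 95% value-at-risk index
print(average_risk, value_at_risk)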
Abstract:
Uncertainty analysis plays a significant role in risk assessment and consists of two tasks: expressing the uncertainty of the input variables in the model and propagating that uncertainty through the model. In the context of fault tree analysis, we aim to provide suitable methods for expressing and propagating uncertainty corresponding to the different stages of knowledge that the risk analyst possesses, where frequentist probability is used to express aleatory uncertainty and uncertainty theory is used to represent epistemic uncertainty. To do so, we divide the analyst's knowledge state into five different stages and develop the appropriate expression of uncertainty for each stage, where different combinations of probability and uncertainty are considered. Methods for propagating these uncertainties through fault trees are further developed, where we introduce probability distributions, uncertainty distributions, newly developed level-2 distributions, and the varying time t into the operational law for Boolean uncertain random systems to better address the needs of practical risk assessments. A case study is conducted to show the differences among the propagation methods corresponding to the various knowledge stages, and the results highlight that the proposed methods are effective and deliver clear messages to decision makers.
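A minimal sketch of propagating time-dependent basic-event probabilities through a small fault tree is shown below; it uses ordinary probability arithmetic rather than the paper's uncertain-variable operational law, and the tree structure, failure rates, and mission time are assumed for illustration only.

# Hedged sketch: time-dependent propagation through TOP = OR( AND(A, B), C )
# with exponential lifetimes p_i(t) = 1 - exp(-lambda_i * t).
import math

def p(lam, t):
    return 1.0 - math.exp(-lam * t)

def top_event(lam_a, lam_b, lam_c, t):
    p_and = p(lam_a, t) * p(lam_b, t)                 # AND gate: product of probabilities
    return 1.0 - (1.0 - p_and) * (1.0 - p(lam_c, t))  # OR gate: complement of product of complements

t = 8760.0  # assumed mission time: one year in hours

# Epistemic uncertainty on lambda_c expressed as an interval [lo, hi]; because the
# top-event probability is monotone in each failure rate, the bounds map directly.
lam_a, lam_b = 1e-4, 2e-4
lam_c_lo, lam_c_hi = 1e-5, 5e-5
print(top_event(lam_a, lam_b, lam_c_lo, t), top_event(lam_a, lam_b, lam_c_hi, t))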
Abstract:
In this paper, we have reviewed various approaches to defining and assessing resilience. We have seen that while resilience is a useful concept, its diversity in usage complicates its interpretation and measurement. We have proposed a resilience analysis framework and a metric for measuring resilience. Our analysis framework consists of system identification, resilience objective setting, vulnerability analysis, and stakeholder engagement. The implementation of this framework is focused on the achievement of three resilience capacities: adaptive capacity, absorptive capacity, and recoverability. These three capacities also form the basis of our proposed resilience factor and uncertainty-weighted resilience metric. We have also identified two important unresolved discussions emerging in the literature: the idea of resilience as an epistemological versus an inherent property of the system, and design for ecological versus engineered resilience in socio-technical systems. While we have not resolved this tension, we have shown that our framework and metric promote the development of methodologies for investigating "deep" uncertainties in resilience assessment while retaining the use of probability for expressing uncertainties about highly uncertain, unforeseeable, or unknowable hazards in design and management activities.
Abstract:
In the past, uncertainty analysis in soil research was often reduced to consideration of statistical variation in numerical data relating to model parameters, model inputs or field measurements. The simplified conceptual approach used by modellers in calibration studies can be misleading, because it relates mainly to error minimisation in regression analysis and is reductionist in nature. In this study, a large number of additional uncertainties are identified through a more comprehensive treatment of the problem. Uncertainties in soil analysis include errors in geometry, position and polygon attributes. The impacts of multiple error sources are described, including covariate error, model error and laboratory analytical error. In particular, a distinction is made between statistical variability (aleatory uncertainty) and lack of information (epistemic uncertainty). Examples of experimental uncertainty analysis are provided and discussed, including reference to error disaggregation and geostatistics, and a systems-based analytic framework is proposed. It is concluded that a more comprehensive and global approach to uncertainty analysis is needed, especially in the context of developing a future soils modelling process that incorporates all known sources of uncertainty.
Abstract:
Objectives: Decision makers adopt health technologies based on health economic models that are subject to uncertainty. In an ideal world, these models parameterize all uncertainties and reflect them in the cost-effectiveness probability and risk associated with the adoption. In practice, uncertainty assessment is often incomplete, potentially leading to suboptimal reimbursement recommendations and risk management. This study examines the feasibility of comprehensive uncertainty assessment in health economic models.
Abstract:
Constructive epistemic modeling is the idea that our understanding of a natural system through a scientific model is a mental construct that continually develops through learning about and from the model. Using hierarchical Bayesian model averaging (BMA), this study shows that segregating different uncertain model components through a BMA tree of posterior model probability, model prediction, within-model variance, between-model variance and total model variance serves as a learning tool. First, the BMA tree of posterior model probabilities permits the comparative evaluation of the candidate propositions of each uncertain model component. Second, systemic model dissection is imperative for understanding the individual contribution of each uncertain model component to the model prediction and variance. Third, the hierarchical representation of the between-model variance facilitates the prioritization of the contribution of each uncertain model component to the overall model uncertainty. We illustrate these concepts using the groundwater flow model of a siliciclastic aquifer-fault system. We consider four uncertain model components. With respect to geological structure uncertainty, we consider three methods for reconstructing the hydrofacies architecture of the aquifer-fault system, and two formation dips. We consider two uncertain boundary conditions, each having two candidate propositions. Through combinatorial design, these four uncertain model components with their candidate propositions result in 24 base models. The study shows that hierarchical BMA analysis helps in advancing knowledge about the model rather than forcing the model to fit a particular understanding or merely averaging several candidate models.
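For reference, the within-/between-model split that the BMA tree organizes follows the standard BMA law of total variance (the notation below is generic, not the paper's, with data $D$, candidate models $M_k$, and predicted quantity $\Delta$):
\[ E[\Delta \mid D] = \sum_k P(M_k \mid D)\, E[\Delta \mid M_k, D], \]
\[ \operatorname{Var}[\Delta \mid D] = \underbrace{\sum_k P(M_k \mid D)\, \operatorname{Var}[\Delta \mid M_k, D]}_{\text{within-model variance}} + \underbrace{\sum_k P(M_k \mid D)\, \bigl(E[\Delta \mid M_k, D] - E[\Delta \mid D]\bigr)^2}_{\text{between-model variance}}. \]
In the hierarchical setting described in the abstract, this kind of decomposition is presumably applied level by level down the BMA tree, so that each uncertain model component contributes its own between-model variance term to the total.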
Abstract:
The following techniques for uncertainty and sensitivity analysis are briefly summarized: Monte Carlo analysis, differential analysis, response surface methodology, Fourier amplitude sensitivity test, Sobol' variance decomposition, and fast probability integration. Desirable features of Monte Carlo analysis in conjunction with Latin hypercube sampling are described in discussions of the following topics: (i) properties of random, stratified and Latin hypercube sampling, (ii) comparisons of random and Latin hypercube sampling, (iii) operations involving Latin hypercube sampling (i.e. correlation control, reweighting of samples to incorporate changed distributions, replicated sampling to test reproducibility of results), (iv) uncertainty analysis (i.e. cumulative distribution functions, complementary cumulative distribution functions, box plots), (v) sensitivity analysis (i.e. scatterplots, regression analysis, correlation analysis, rank transformations, searches for nonrandom patterns), and (vi) analyses involving stochastic (i.e. aleatory) and subjective (i.e. epistemic) uncertainty.
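A minimal sketch of basic Latin hypercube sampling as summarized above follows; the sample size, input distributions, and stand-in model are assumed for illustration, and SciPy/NumPy are assumed to be available (the correlation-control and reweighting operations mentioned in the abstract are not reproduced here).

# Hedged sketch of Latin hypercube sampling (LHS): each of n strata of [0, 1] is
# sampled exactly once per input dimension, and strata are randomly paired
# across dimensions.
import numpy as np
from scipy import stats

def latin_hypercube(n, d, rng):
    u = np.empty((n, d))
    for j in range(d):
        perm = rng.permutation(n)             # random pairing of strata across dimensions
        u[:, j] = (perm + rng.random(n)) / n  # one point in each of the n strata
    return u

rng = np.random.default_rng(1)
u = latin_hypercube(n=100, d=2, rng=rng)

# Map uniform LHS points to the input distributions via inverse CDFs,
# e.g. a normal and a lognormal input (assumed for illustration).
x1 = stats.norm(loc=0.0, scale=1.0).ppf(u[:, 0])
x2 = stats.lognorm(s=0.5, scale=1.0).ppf(u[:, 1])
y = x1 + x2**2                                # stand-in for the model under study
print(np.mean(y), np.percentile(y, 95))       # summaries feeding CDFs/CCDFs and box plots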
Abstract:
Evidence theory is widely regarded as a promising mathematical tool for epistemic uncertainty analysis. However, the heavy computational burden has severely hindered its application in practical engineering problems, which is essentially caused by the discrete uncertainty quantification mechanism of evidence variables. In this paper, an efficient epistemic uncertainty analysis method using evidence theory is proposed, based on a probabilistic and continuous representation of the epistemic uncertainty present in evidence variables. First, each evidence variable is equivalently transformed into a Johnson p-box, which is a family of Johnson distributions enveloped by the CDF bounds. Subsequently, probability bound analysis is conducted for the input Johnson p-boxes and the response CDF based on monotonicity analysis. Finally, the CDF bounds of the response are directly calculated using the CDF bounds of the input Johnson p-boxes, by which a high computational efficiency is achieved for the proposed method. Two mathematical problems and two engineering applications are presented to demonstrate the effectiveness of the proposed method.
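The probability-bound idea behind the final step can be sketched as follows; the Johnson-distribution fitting of the paper is not reproduced, a p-box is simply represented here by two bounding normal CDFs, and the response function and parameter values are assumed for illustration.

# Hedged sketch: pushing input CDF bounds through a monotone response.
import numpy as np
from scipy import stats

# Input p-box: lower/upper CDF envelopes of an uncertain input x.
cdf_lower = stats.norm(loc=1.2, scale=0.1)  # envelope giving the lower CDF bound (larger x values)
cdf_upper = stats.norm(loc=0.8, scale=0.1)  # envelope giving the upper CDF bound (smaller x values)

def response(x):
    return x**2 + 2.0 * x                   # assumed monotonically increasing in x

# For a monotonically increasing response, the output CDF bounds follow directly
# by pushing the quantiles of the input envelopes through the response.
alpha = np.linspace(0.01, 0.99, 99)
y_lower_cdf = response(cdf_lower.ppf(alpha))  # quantiles of the response's lower CDF bound
y_upper_cdf = response(cdf_upper.ppf(alpha))  # quantiles of the response's upper CDF bound
print(y_lower_cdf[49], y_upper_cdf[49])       # bounding medians of the response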